Fix for inconsistent Bigint/Long datatype that makes dbt think there was a schema change #358
base: main
Conversation
Thank you for your pull request and welcome to our community. We could not parse the GitHub identity of the following contributors: Francesco Mucio.
@cla-bot check
The cla-bot has been summoned, and re-checked this pull request!
@francescomucio Thanks for the PR! Could you rebase/squash your commits and force-push? It looks like the first one doesn't have a git identity set, which is why the CLA bot isn't happy.
* More consistent results from get_columns_in_relation
* Not dispatched, full name
* Add changelog entry
@francescomucio Sorry for the delay, this is looking very good. Do you think you will have time to update the branch to clear up the conflicts? Would love to keep your local and remote up to date with one another.
@McKnight-42 Sorry for the late reply, I will try to work on this today. BTW, I reached out to Databricks support and they told me that the problem also exists in open-source Apache Spark. Should I prepare a PR for that adapter as well?
CHANGELOG.md (outdated)

```
### Under the hood
- Add `DBT_INVOCATION_ENV` environment variable to ODBC user agent string ([#366](https://github.com/dbt-labs/dbt-spark/pull/366))
```
Not sure how this changelog entry disappeared — mind adding it back, and adding one of your own? :)
Oops, sure, I will do it.
@McKnight-42 @jtcohen6 Sorry for being late, I finally found the time to fix this PR. I guess this will go in 1.2.0, not in 1.1.1.
@francescomucio This is looking fantastic. It seems that once you pushed, the pre-commit check started failing on end-of-line spacing. If you rerun the pre-commit command locally and add those end-of-file changes, that should fix it.
@McKnight-42 I think I fixed it
@francescomucio Thank you so much, and I'm sorry about this; we just released some new versions for
* Not dropping table for incremental full refresh with delta
* Updated changelog
* Simplified conditional logic according to suggestion
* Updated changelog
* Only drop table if not delta table
* Update changelog, trigger CircleCI tests

Co-authored-by: Jeremy Cohen <[email protected]>
* Run tests for data type macros. Fine-tune numeric_type
* Hard-code seed loading types for float + int
* Repoint, fixup, changelog entry
@jtcohen6 would this be by setting the
@McKnight-42 @jtcohen6 Any update on this? We are trying to upgrade to dbt version 1.3, and I still need to apply this fix.
What's the status of this change?
```diff
@@ -46,7 +50,7 @@ def numeric_type(cls, dtype: str, precision: Any, scale: Any) -> str:
         return "{}({},{})".format("decimal", precision, scale)

     def __repr__(self) -> str:
-        return "<SparkColumn {} ({})>".format(self.name, self.data_type)
+        return "<SparkColumn {} ({})>".format(self.name, self.translate_type(self.data_type))
```
`self.data_type` is already translated?
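For readers following along, the idea behind the `__repr__` change can be sketched in isolation. The class below is a simplified, hypothetical stand-in for the adapter's `SparkColumn` (only the methods touched by this PR, with a toy alias table), not the real implementation:

```python
class SparkColumn:
    """Simplified stand-in for the adapter's column class (illustrative only)."""

    def __init__(self, name: str, data_type: str):
        self.name = name
        self.data_type = data_type

    @classmethod
    def numeric_type(cls, dtype: str, precision, scale) -> str:
        # Spark requires explicit precision/scale for decimal columns.
        return "{}({},{})".format("decimal", precision, scale)

    @staticmethod
    def translate_type(dtype: str) -> str:
        # Toy alias table for the bigint/long inconsistency discussed in this PR.
        return {"long": "bigint"}.get(dtype, dtype)

    def __repr__(self) -> str:
        # After the change, the repr shows the translated (canonical) type name.
        return "<SparkColumn {} ({})>".format(
            self.name, self.translate_type(self.data_type)
        )


print(SparkColumn.numeric_type("decimal", 10, 2))  # decimal(10,2)
print(SparkColumn("id", "long"))                   # <SparkColumn id (bigint)>
```

The reviewer's question above is whether this extra translation is redundant, i.e. whether `self.data_type` has already been passed through `translate_type` by the time `__repr__` runs.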
This PR has been marked as Stale because it has been open with no activity as of late. If you would like the PR to remain open, please comment on the PR or else it will be closed in 7 days.
resolves #642
Description
This PR fixes the inconsistent data type returned by Spark SQL for `bigint` columns (`bigint` vs `long`).
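The failure mode can be sketched as follows: Spark SQL reports the same column type as `long` in some metadata paths and `bigint` in others, so a naive string comparison flags a schema change where none exists. The snippet below is a minimal, hypothetical illustration; the alias table and function names are not the adapter's actual API:

```python
# Hypothetical sketch: normalize Spark type aliases before comparing schemas.
# Without normalization, "long" vs "bigint" looks like a schema change.

TYPE_ALIASES = {
    "long": "bigint",     # Spark sometimes reports bigint columns as "long"
    "short": "smallint",
    "byte": "tinyint",
}

def normalize_type(dtype: str) -> str:
    """Map Spark-internal type names to their SQL-standard spellings."""
    return TYPE_ALIASES.get(dtype.lower(), dtype.lower())

# The same table described by two different metadata code paths:
existing = [("id", "long"), ("amount", "decimal(10,2)")]
target = [("id", "bigint"), ("amount", "decimal(10,2)")]

naive_changed = existing != target  # True: a false-positive schema change
normalized_changed = (
    [(name, normalize_type(t)) for name, t in existing]
    != [(name, normalize_type(t)) for name, t in target]
)  # False: the types actually match

print(naive_changed, normalized_changed)  # True False
```

With normalization applied consistently in `get_columns_in_relation`, dbt's schema-change detection no longer trips on the alias difference.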
Checklist
- [x] I have updated the CHANGELOG.md and added information about my change to the "dbt-spark next" section.